Total error vs. measurement uncertainty: the match continues.
Although 5 years have passed since the publication of a previous editorial in this journal that tried to reconcile the concepts [1], laboratory professionals are still playing the total error (TE) vs. uncertainty game. Two papers published in this issue of Clinical Chemistry and Laboratory Medicine further discuss the subject [2, 3], which was revitalized by the organization of the 1st EFLM Strategic Conference on analytical performance specifications, held in Milan at the end of 2014 [4]. The organizers considered it timely to address the topic of performance specifications both because 15 years had passed since it was last addressed and because performance specifications are increasingly central to the clinical application of test measurements [5, 6]. As members of the scientific program committee of the Milan conference, we may be biased in commenting on papers dealing with its topics; nevertheless, some additional thoughts seem useful here. We should first underline that the consensus statement from the conference does not itself address the problem of TE vs. uncertainty [7]. This is indeed outside the scope of the statement, which concentrates on models for setting performance specifications. The TE and measurement uncertainty concepts were discussed by some of the speakers at the conference and taken forward by one of the Task and Finish Groups (TFG) established during the conference [4].

Westgard’s opinion paper contains some criticisms that deserve clarification [2]. Like a previously published paper on the same topic [8], it is more a defense of the TE concept that the same author promoted 40 years ago than an objective comment on the need to improve the definition and quality of performance specifications, which was the main topic of the Milan conference. We note some misconceptions in the interpretation of the Milan consensus [7], for example when the author considers the “Milan simplification from 5 to 3 levels of models a modest adjustment of the Stockholm guidance”. The three models proposed in the Milan consensus document are not necessarily a hierarchy: the recommendation is to use model 1 or 2 depending on the nature and characteristics of the measurands and on their clinical application (i.e. diagnosis, monitoring, etc.). Accordingly, Westgard ignores the need, recognized in the Milan consensus statement and the very reason EFLM created a TFG [4], to allocate different tests, or different clinical applications of the same test, to different models for performance specifications, preferentially model 1 or 2. Another ignored aspect is the quality of the information, which in our view is one of the novelties of the Milan conference.

In Westgard’s paper, biological variability is strongly promoted as a source of analytical performance specifications. This can be a good and usable model for many measurands, but it must be underlined that there is a clear need for evidence-based data and that studies and papers reporting biological variation have to be critically appraised [9, 10]. Another important aspect is the utility of elaborating specifications at different levels of quality (i.e. minimum, desirable and optimum), so that laboratories can, where necessary, move from desirable to minimum quality goals while, in the meantime, asking IVD companies to work on improving the quality of assay performance [11].
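For orientation, the quality levels referred to above are conventionally quantified with the widely used Fraser-type multipliers, sketched here only for illustration and not restated in the Milan consensus itself. With CV_I and CV_G denoting within- and between-subject biological variation, the desirable specifications are

    analytical imprecision CV_A ≤ 0.50 · CV_I and bias |B| ≤ 0.25 · √(CV_I² + CV_G²),

while the minimum and optimum levels scale these factors by 1.5 and 0.5, respectively (i.e. 0.75 · CV_I and 0.25 · CV_I for imprecision; 0.375 and 0.125 times √(CV_I² + CV_G²) for bias).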
Finally, a confusing aspect of Westgard’s paper [2] is the tendency to mix statements contained in the individual papers published in the conference proceedings with the agreed concepts that became part of the consensus statement. An example is Petersen’s paper [12], whose content is treated “tout court” as the “preference” of the Milan audience. It should be underlined that each of the published papers from the Milan conference represents the opinion of its authors and that there is only one “consensus” paper from the conference [7]. Regarding the utility of TE, Westgard tries in his paper to create a sort of “black and white” opposition between TE and measurement uncertainty. While we agree that TE may still be useful, for example in evaluating single EQAS results, there is also a need for all stakeholders working in the field of Laboratory Medicine to look carefully at measurement uncertainty, as it helps greatly in the identification of important sources of …
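To make the contrast concrete, and only as an illustrative sketch based on the classical total error model and the GUM convention rather than on the Milan documents: the TE approach combines bias and imprecision into a single allowable limit,

    TE = |B| + 1.65 · CV_A (one-sided 95% criterion),

whereas the uncertainty approach combines the individual variance contributions of a measuring procedure into a combined standard uncertainty and reports an expanded uncertainty,

    u_c = √(u_1² + u_2² + … + u_n²), U = k · u_c (k ≈ 2 for about 95% coverage),

thereby keeping the individual sources of variability visible rather than collapsing them into one fixed limit.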
Journal: Clinical Chemistry and Laboratory Medicine
Volume 54, Issue 2
Published: 2016